How to keep the HG weights non-negative: the truncated Perceptron reweighing rule
Author
Abstract
The literature on error-driven learning in Harmonic Grammar (HG) has adopted the Perceptron reweighing rule. Yet this rule is not suited to HG, as it fails to ensure non-negative weights. A variant is thus considered which truncates updates at zero, keeping the weights non-negative. Convergence guarantees and error bounds for the original Perceptron are shown to extend to this truncated variant.
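The truncated update described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name, the learning rate, and the convention that the learner promotes constraints violated more by the intended loser are assumptions based on the standard error-driven HG setup.

```python
import numpy as np

def truncated_perceptron_update(weights, loser_violations, winner_violations, rate=1.0):
    """One truncated Perceptron update for HG (illustrative sketch).

    On an error, each constraint weight is adjusted by the difference
    between the loser's and the winner's violation counts; any weight
    that would become negative is truncated (clipped) at zero, so the
    weight vector stays in the non-negative orthant.
    """
    update = rate * (np.asarray(loser_violations, dtype=float)
                     - np.asarray(winner_violations, dtype=float))
    return np.maximum(0.0, np.asarray(weights, dtype=float) + update)

# Example: the second weight would be driven to -2.5 by the plain
# Perceptron rule; truncation keeps it at zero instead.
w = truncated_perceptron_update([1.0, 0.5], [2, 0], [0, 3])
```

The only change relative to the classical Perceptron rule is the outer `np.maximum(0.0, ...)`; it is this clipping step that the paper's convergence guarantees and error bounds are shown to tolerate.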
Similar Articles
Error-driven Learning in Harmonic Grammar
The HG literature has so far adopted the Perceptron reweighing rule because of its convergence guarantees. Yet this rule is not suited to HG, as it fails to ensure non-negativity of the weights. The first contribution of this paper is a solution to this impasse. I consider a variant of the Perceptron which truncates any update at zero, thus maintaining the weights non-negative in a principle...
Noise robustness and stochastic tolerance of OT error-driven ranking algorithms
Recent counterexamples show that Harmonic Grammar (HG) error-driven learning (with the classical Perceptron reweighing rule) is not robust to noise and does not tolerate the stochastic implementation (Magri 2014, MS). This article guarantees that no analogous counterexamples are possible for proper Optimality Theory (OT) error-driven learners. In fact, a simple extension of the OT convergence a...
Discrete All-positive Multilayer Perceptrons for Optical Implementation
All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions which are truncated, asymmetric, and have a non-standard gain, restriction of the network parameters to non-negative values, and the limited accuracy of the weights. In this paper, a backpropagation-based learning rule is presented that compensates...
Finite convergence of a fuzzy δ rule for a fuzzy perceptron
A learning algorithm based on a fuzzy δ rule is proposed for a fuzzy perceptron with the same topological structure as conventional linear perceptrons. The inner operations involved in the working process of this fuzzy perceptron are based on the max-min logical operations rather than conventional multiplication and summation etc. The initial values of the network weights are fixed as 1. Each v...
A "Thermal" Perceptron Learning Rule
The thermal perceptron is a simple extension to Rosenblatt’s perceptron learning rule for training individual linear threshold units. It finds stable weights for nonseparable problems as well as separable ones. Experiments indicate that if a good initial setting for a temperature parameter, To, has been found, then the thermal perceptron outperforms the Pocket algorithm and methods based on gra...
Journal: J. Language Modelling
Volume: 3, Issue: -
Pages: -
Published: 2015